For Mobilenet V4 large #821
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
Kudos, SonarCloud Quality Gate passed!
@mohitmundhragithub I fixed a stupid bug in the floating-point model. Please try again.
Thanks a lot @freedomtan for sharing this. I have added one more commit to modify the input images required for the calibration set for PTQ. Please have a look.
@anhappdev For MobileNet EdgeTPU, the input tensor is 224x224x3; for the newly introduced model, it's 384x384x3. What's the best way to support both? Add a new dataset type, e.g., imagenet_384? Or should we add width and height parameters?
Where should we have both of them? Do you mean the dataset config? There is already one in mobile_app_open/flutter/cpp/proto/mlperf_task.proto (lines 95 to 108 in e02bdc8).
Nope, we hard-coded the image size for the imagenet dataset at https://github.com/mlcommons/mobile_app_open/blob/master/flutter/cpp/flutter/dart_run_benchmark.cc#L73-L77:

```cpp
switch (in->dataset_type) {
  case ::mlperf::mobile::DatasetConfig::IMAGENET:
    dataset = std::make_unique<::mlperf::mobile::Imagenet>(
        backend.get(), in->dataset_data_path, in->dataset_groundtruth_path,
        in->dataset_offset, 224, 224 /* width, height */);
    break;
  ...
```
In that case, we just need to update the code to read the image size from the task file? |
Yes, that's a better solution. A dirty hack would be to add something like:

```cpp
switch (in->dataset_type) {
  case ::mlperf::mobile::DatasetConfig::IMAGENET:
    dataset = std::make_unique<::mlperf::mobile::Imagenet>(
        backend.get(), in->dataset_data_path, in->dataset_groundtruth_path,
        in->dataset_offset, 224, 224 /* width, height */);
    break;
```

How should we update our code to pass the image size?
I can send a PR to pass the image size.
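For reference, a minimal sketch of what such a change might look like, assuming hypothetical `image_width` and `image_height` fields are added to the benchmark input struct (and ultimately populated from the task config); these field names are assumptions, not the existing API:

```cpp
// Sketch only: replace the hard-coded 224x224 with values from the task
// configuration. in->image_width and in->image_height are hypothetical
// fields that would need to be added to the input struct / task proto.
switch (in->dataset_type) {
  case ::mlperf::mobile::DatasetConfig::IMAGENET:
    dataset = std::make_unique<::mlperf::mobile::Imagenet>(
        backend.get(), in->dataset_data_path, in->dataset_groundtruth_path,
        in->dataset_offset, in->image_width, in->image_height);
    break;
  // other dataset types unchanged
}
```

This would keep a single IMAGENET dataset type and avoid adding a separate imagenet_384 entry just for the new 384x384 model.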
@freedomtan Can you rebase this PR on the master branch?
sure, I'll do it. |
@RSMNYS The CoreML model for MobilenetV4 Large converted from TensorFlow doesn't perform as well as expected; this could be used as a case for optimizing CoreML models.
@anhappdev Any comments on how to fix the 6 failed tests?
They mostly failed on expected accuracy or throughput. I will run them several times and update the expected values. |
Currently, the …
Sometimes it failed, sometimes all tests passed. Tested on a Pixel 7 Pro running the latest Android 14. An "all tests passed" example was uploaded to Firebase.
Let's test the TFLite backend on more devices to see if we can figure out why we got an accuracy of 0 on the Pixel 5.
On a Samsung Galaxy S22+ (Exynos 2200), with the TFLite backend only, I got the expected accuracy numbers.
Pixel backend: …
TFLite backend: …
That's interesting. I guess for the Pixel backend + image_classification_offline_v2, we can get the expected accuracy by reducing the batch size. For the Pixel 5, is it the API level or some other software stack issue?
The Pixel 5 on Firebase Test Lab has only API Level 30 available, so I cannot test it with other API levels. |
@freedomtan is this model the one we need to work with: https://github.com/mlcommons/mobile_open/raw/main/vision/mobilenetV4? or this savedModel: https://github.com/mlcommons/mobile_open/releases/tag/model_upload? |
Either one, or, say, neither of them. We convert from the saved_model to a CoreML model. The performance of the converted CoreML model is not as good as expected. Please check how we can get a better CoreML model from either the tflite or the saved_model.
@freedomtan are we expecting the MobilenetV4 performance on the image_classification_V2 task to be similar to that of https://github.com/mlcommons/mobile_models/raw/main/v3_0/CoreML/MobilenetEdgeTPU.mlmodel, which is used for the image_classification task (currently near 1000 qps)?
As far as I can tell, 1000 qps is too good to be true for Mobilenet V4 Large. But 300 qps should be possible, I guess.
Use Mobilenet V4 large for Image Classification V2.